The 'Godfather of AI' doesn't share the industry's optimism.

Geoffrey Hinton Doubts Good AI Will Overcome Bad AI

Geoffrey Hinton, a renowned University of Toronto professor and a leading figure in artificial intelligence, has taken on the role of an unofficial watchdog for the industry. Widely recognized as the “Godfather of AI” for his groundbreaking work on neural networks, Hinton stepped away from his position at Google in order to speak more freely about the field he helped shape. He has expressed concern about the unchecked, potentially hazardous pace of generative AI development, exemplified by the rise of ChatGPT and Bing Chat. Google, by contrast, appeared to be abandoning its earlier caution as it chased competitors with products like its Bard chatbot.

At the Collision conference in Toronto this week, Hinton expanded on those concerns. While companies at the event touted AI as the answer to everything from signing a lease to delivering goods, Hinton sounded the alarm. He's not convinced that good AI will prevail over bad AI, and he believes that implementing the technology ethically may come at a steep cost.

A threat to humanity

University of Toronto Professor Geoffrey Hinton (left) speaking at Collision 2023. (Image credit: Photo by Jon Fingas/Engadget)

Hinton argued that AI is only as good as the people who make it, and that bad technology could still prevail. “I'm not convinced that a good AI trying to stop evil can gain control,” he explained. It might be hard to stop the military-industrial complex from building combat robots, for example: companies and militaries might “love” wars in which the casualties are machines that can easily be replaced. And while Hinton believes that large language models (trained AI that produces human-like text, such as OpenAI's GPT-4) could deliver massive productivity gains, he worries that the ruling class could simply exploit them to enrich themselves and widen an already large wealth gap. It would make the rich richer and the poor poorer, Hinton said.

Hinton also reiterated his much-publicized view that AI could pose an existential risk to humanity: if artificial intelligence becomes smarter than humans, there's no guarantee that people will remain in control. “We're in trouble” if AI decides that taking over is necessary to achieve its goals, Hinton said. To him, these threats aren't just science fiction; they have to be taken seriously. He worries that society would only rein in killer robots after it had a chance to see just how terrible they are.

There are plenty of existing problems, too, Hinton added. Bias and discrimination remain issues, as skewed AI training data can produce unfair results. Algorithms likewise create echo chambers that reinforce misinformation and harm mental health. Hinton is also concerned about AI spreading misinformation beyond those chambers. He isn't sure whether it's possible to catch every false claim, even though it's “important to flag every fake as a fake.”

That's not to say Hinton despairs over AI's impact, although he warns that healthy uses of the technology could come at a high price. Humans may have to do “empirical work” to understand how AI can go wrong and to keep it from taking control. It's already “possible” to correct bias, he added. Large language models might eventually put an end to echo chambers, but Hinton sees changes in company policy as especially important.

The professor didn't mince words when answering questions about people losing their jobs to automation. He believes “socialism” is needed to address inequality, and that people can hedge against unemployment by taking up professions that can adapt over time, like plumbing (and no, he wasn't kidding). Society may have to make sweeping changes to adjust to artificial intelligence.

The industry remains optimistic

Google DeepMind CBO Colin Murdoch at Collision 2023. (Image credit: Photo by Jon Fingas/Engadget)

Earlier discussions at Collision were more hopeful. Colin Murdoch, Google DeepMind's chief business officer, said in a separate session that AI is tackling some of the world's toughest challenges. There's little dispute on that front: DeepMind is cataloging all known proteins, fighting antibiotic-resistant bacteria, and even accelerating work on malaria vaccines. He envisioned an “artificial general intelligence” that could solve a wide range of problems, and pointed to Google's products as examples. Lookout is notable for describing images, but the underlying technology also makes YouTube Shorts searchable. Murdoch went so far as to call the past six to 12 months AI's “lightbulb moment” that unlocked its potential.

Roblox Chief Scientist Morgan McGuire largely agrees. He believes the game platform's generative AI tools have “closed the gap” between new creators and veterans, making it easier to write code and create in-game materials. Roblox is even releasing an open-source AI model, StarCoder, which it hopes will help others by making large language models more accessible. While McGuire acknowledged challenges in scaling and moderating content during the discussion, he believes the metaverse holds “limitless” possibilities thanks to its creative community.

Both Murdoch and McGuire expressed some of the same concerns as Hinton, but their tone was decidedly less alarmist. Murdoch emphasized that DeepMind wants “safe, ethical and inclusive” AI, pointing to expert consultations and investment in training as evidence. The executive insisted he was open to regulation, but only so long as it enabled an “amazing breakthrough.” McGuire, for his part, said Roblox always releases generative AI tools with content moderation, relies on diverse datasets, and practices transparency.

Some hope for the future

Roblox Chief Scientist Morgan McGuire speaks at Collision 2023. (Image credit: Photo by Jon Fingas/Engadget)

Despite the headlines surrounding his recent comments, Hinton's enthusiasm for AI hasn't waned since leaving Google. If he hadn't quit, he's sure he would be working on multimodal AI models, where vision, language, and other cues inform decision-making. “Young children don't learn from language alone,” he said, suggesting that machines could learn the same way. While he worries about the dangers of artificial intelligence, he believes it could eventually do everything a human can, and that it's already showing “little bits of reasoning.” GPT-4 can adapt to solve harder puzzles, for example.

Hinton admitted that his largely pessimistic talk didn't say much about the good uses of artificial intelligence, such as fighting climate change. The technology's continued advancement was probably healthy, he said, even if it was still important to worry about the consequences. And Hinton freely acknowledged that his enthusiasm hasn't waned despite the looming ethical and moral dilemmas. “I love this stuff,” he said. “How can you not love doing smart things?”
